Patent abstract:
This method (100) for detecting ground-based and moving targets in a video stream acquired by an airborne digital camera is characterized in that it comprises the steps of: processing (20 - 40) a plurality of successive frames so as to stabilize the frames as if they had been acquired by a fixed camera; and comparing (50 - 60) two processed frames, temporally separated from one another, so as to identify moving pixel areas from one frame to another, the moving pixel areas constituting detected targets.
Publication number: FR3047103A1
Application number: FR1600125
Filing date: 2016-01-26
Publication date: 2017-07-28
Inventors: Gilles Guerrini; Fabien Camus; Fabien Richard
Applicant: Thales SA
Patent description:

METHOD FOR DETECTING GROUND-BASED AND MOVING TARGETS IN A VIDEO STREAM ACQUIRED BY AN AIRBORNE CAMERA
The present invention relates to methods for detecting ground-based and moving targets in a video stream acquired by an airborne camera.
An airborne camera is mobile in relation to the ground, not only because the aircraft carrying it moves relative to the ground, but also because an operator controls, for example from a ground station, the movements of the camera relative to the aircraft, so as to observe a particular area overflown by the aircraft. The acquired video stream is transmitted, in real time, to the ground station for analysis.
The detection of the ground movements of any type of vehicle (military vehicle, car, two-wheeler, etc.) is essential information to be extracted from the video stream.
The automatic detection of vehicles on the ground and in motion, in a video stream acquired by a fixed camera, for example rigidly mounted on a mast installed in the environment, is known. The fact that the camera is fixed with respect to the ground makes it possible to disregard the scenery of the observed scene and to deal only with the portions of the image that evolve from one frame of the video stream to the next, and which consequently represent potential targets.
For a video stream acquired by a mobile camera, the detection of portions of the image that change from one frame to another can be performed automatically by implementing a Harris procedure. Such a procedure consists, firstly, in applying an algorithm for identifying the remarkable points in an image of the video stream, then, in a second step, in applying a reconstruction algorithm to associate the remarkable points identified in the image considered, so as to delimit portions of the image that correspond to an observed object. The evolution of these portions from one image to another makes it possible to determine whether an object is moving.
However, such an algorithm is not sufficient to discriminate targets of small size, in this case vehicles observed remotely by an airborne camera.
But above all, such an algorithm requires a long computation time, especially for the association of the remarkable points into objects. Such a computation time is not compatible with a real-time analysis of the acquired video stream.
Thus, currently, the video stream acquired by an airborne camera is displayed on a screen of the ground station and the operator visually analyzes the succession of images to try to recognize moving objects. The operator may have to spend several hours performing this visual analysis. Since the operator's attention cannot be maintained permanently, target detection performed in this way is not always effective.

The purpose of the invention is therefore to overcome this problem, notably by proposing a method offering assistance to the operator by automatically detecting, in real time, in the video stream acquired by an on-board camera, the targets constituted by objects moving relative to the scenery, and by presenting these potential targets appropriately to the operator.

The subject of the invention is a method for detecting ground-based and moving targets in a video stream acquired by an airborne digital camera, characterized in that it comprises the steps of: processing a plurality of successive frames so as to stabilize the frames as if they had been acquired by a fixed camera; and comparing two processed frames, temporally separated from one another, so as to identify the areas of moving pixels from one frame to another, the moving pixel areas constituting detected targets.
The method according to the invention allows the detection of ground-based and moving targets (especially small targets in terms of number of pixels in an acquired image) in a raw video stream from a camera carried by an aircraft, the video stream being dynamic in the sense that the orientation and/or the magnification of the camera relative to the scenery changes during the shooting.
The method is based on the possibility of differentiating between a point of the ground and a point of an object moving relative to the ground, from the determination of their relative displacements, which evolve distinctly both in direction and in intensity.
According to particular embodiments, the method comprises one or more of the following characteristics, taken separately or in any technically possible combination:
- the step of processing a plurality of successive frames comprises a step of correcting a relative parallax error affecting each frame of said plurality of successive frames, so as to obtain, for each frame, a corrected frame, this step implementing an algorithm for determining a projective transformation for passing from the frame at a current instant to a corrected frame associated with a frame at a previous sampling instant;
- the step of processing a plurality of successive frames comprises a step of determining a displacement vector of the camera relative to the ground, this step implementing an algorithm for determining an optical flow for passing from a current corrected frame to a past corrected frame separated from the current corrected frame by an integer s of sampling instants;
- the algorithm for determining an optical flow uses a homogeneous grid of points;
- the integer s is selected between 5 and 15, and is preferably equal to 10;
- once the displacement vector of the camera relative to the ground has been determined, a corresponding transformation is applied to the current corrected frame to compensate for the effect of the displacement of the camera and to obtain a final frame superimposable on the past corrected frame, taken as the initial frame;
- the step of comparing two frames consists of comparing the final and initial frames with each other by successively performing the following sub-steps: subtracting pixel by pixel the initial frame from the final frame so as to obtain, for each pixel, a color distance value; applying a color distance threshold, each pixel having a color distance value less than or equal to said threshold taking the zero value and each pixel having a color distance value greater than said threshold taking the unit value, so as to obtain a bitmap; and applying a contour determination algorithm on the bitmap to group the unit-value pixels into moving pixel areas;
- the bitmap obtained at the end of the sub-step of applying a threshold being an intermediate bitmap, the following sub-steps are performed: application of a morphological erosion transformation using a suitable mask; application of a morphological dilation transformation using the mask of the morphological erosion transformation;
- the method comprises an additional step of checking the coherence of the moving pixel areas identified at the end of the step of comparing two frames, consisting in determining, for each moving area, a correlation index between the values of the pixels of the two frames, a moving area being considered to correspond to a target when the correlation index is close to the value -1;
- a moving pixel area identified at the end of the comparing step is suitably displayed superimposed on the video stream displayed on a control screen;
- the video stream is a video stream in the visible or infrared spectral domain;
- the method is executed in real time or in constrained time on the video stream;
- the method allows the detection of small targets, of the order of a few pixels.

The invention also relates to an information recording medium comprising the instructions of a computer program adapted to be executed by a computer to implement a method for detecting ground-based and moving targets in a video stream acquired by an airborne digital camera according to the preceding method.

The invention and its advantages will be better understood on reading the detailed description which follows of a particular embodiment, given solely by way of non-limiting example, this description being made with reference to the appended drawings, in which:
- Figure 1 is a diagram of the system in which the method according to the invention is implemented;
- Figure 2 is a block diagram of the method according to the invention;
- Figure 3 is a current frame on which the displacement vectors determined during the method of Figure 2 have been superimposed;
- Figure 4 represents the application of an erosion mask and then of a dilation mask in accordance with the method of Figure 2; and
- Figure 5 shows different pixel matrices at different stages of the method of Figure 2.
As shown diagrammatically in FIG. 1, the method according to the invention makes it possible to detect a target 2, of the type constituted by a vehicle moving on the surface of the ground 1, in a video stream acquired by a camera 4 carried by an aircraft 6.
The camera 4 allows the acquisition of images in the visible or infrared domain.
The camera 4 is a digital camera, so that each acquired image is a matrix of N×M pixels, referred to in the following as a frame.
The acquired video stream comprises for example 24 frames per second, but other acquisition frequencies are possible.
During the acquisition, the aircraft 6 moves relative to the ground with six degrees of freedom.
During the acquisition, the camera 4 is movable relative to the aircraft 6. For example, the camera, being fixed under the aircraft 6, can be moved with two degrees of angular freedom. The camera 4 also has a degree of freedom in magnification, for zooming in on an area of interest on the ground.
Thus, during the shooting, the camera, defined by its optical center C, its optical axis A and an axis B orthogonal to the optical axis (orienting the image plane of the camera), moves relative to the ground 1 according to three Cartesian coordinates X, Y and Z and three angular coordinates. To these displacements in space must be added the possibility of a variation of the magnification w.
The video stream is transmitted to a ground station 10 by suitable telemetry means 9.
The station 10 comprises at least one computer 12 comprising computing means, such as a processor, and storage means, such as random access memory and read-only memory, the storage means storing the computer program instructions adapted to be executed by the computing means. In particular, the storage means store a program which, when executed by the computing means, implements the method 100 according to the invention on a video stream. Preferably, the program is executed in real time on the raw video stream received.
The method 100 for detecting ground-based and moving targets in a video stream acquired by an airborne camera will now be described with reference to FIG. 2.

STEP 10: acquisition of a video stream
During the flight of the aircraft 6, the camera 4 performs an acquisition of a plurality of successive frames. The corresponding video stream is transmitted to the ground station 10.
The video stream received by the ground station 10 is decomposed into frames T. Each frame is time-stamped and stored in the storage means of the computer 12.
If T(0) is the frame at the current instant t0, then T(-1) is the previous frame, separated from it by a time step equal to the inverse of the acquisition frequency f, and T(-s) is the past frame at the instant t-s, which is separated from the current instant t0 by a duration of s times the time step 1/f, s being a natural integer.
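By way of illustration only, the frames needed by the method could be kept in a small time-indexed buffer. The following Python sketch (the class and parameter names are hypothetical, not taken from the patent) keeps the s + 1 frames required to access both T(0) and T(-s):

```python
from collections import deque


class FrameBuffer:
    """Sketch: keep the last s + 1 time-stamped frames so that the current
    frame T(0) and the past frame T(-s) are both available; s = 10 follows
    the preferred value given later in the text."""

    def __init__(self, s=10):
        self.frames = deque(maxlen=s + 1)

    def push(self, frame, timestamp):
        self.frames.append((timestamp, frame))

    def current(self):
        return self.frames[-1][1]   # frame T(0) at instant t0

    def past(self):
        return self.frames[0][1]    # frame T(-s) once the buffer is full
```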
To enable the application of the method according to the invention in real time, approximately one frame out of two of the initial video stream is taken into account. Thus, the frequency f of the frames used is, for example, 12 frames per second. It will be seen that the method according to the invention uses a plurality of successive frames, which are not necessarily consecutive frames of the video stream. In addition, the duration between two successive frames is not necessarily constant.

STEP 20: correction of the relative parallax error affecting each frame
The first processing step aims to transform a set of successive frames so as to bring them back into a common reference plane, fixed relative to the ground. Advantageously, this common reference plane is constituted by the coordinate plane of a frame taken as reference, for example the frame T(-s) at the instant t-s.
This transformation is intended to correct the relative parallax error introduced, between the different frames, by the displacement of the camera over the corresponding duration. A priori, the reference plane does not correspond to the XY plane of the ground 1; that is to say, the acquisition plane of the image at the instant t-s is not parallel to the XY plane of the surface of the ground. Thus, after transformation, a common residual parallax error will affect all the frames considered. However, if instead of identifying the speed of a target one is interested in the simple displacement of this target, this residual parallax error has no consequence.
It should be noted that this correction step also makes it possible to correct the effects of a variation of the magnification w of the camera and of a variation of the altitude Z of the camera between the various frames considered.
The corrected frame, resulting from the correction of the relative parallax error of the frame T(-i) with respect to the frame T(-s), is denoted F(-i).
The current frame T(0) must then be processed to correct its relative parallax error.
The correction of the relative parallax error is a matter of projective transformations, a family which includes translations, Euclidean transformations (that is, rotations in the plane), similarities (that is, changes of scale), affine transformations and general projective transformations, as well as combinations of these transformations.
To determine the projective transformation to be applied to the current frame T(0) to obtain the corresponding corrected current frame F(0), the frame T(0) is compared with the previous corrected frame F(-1), which has already been brought back into the common reference plane.
This comparison must make it possible to determine the projective transformation M mapping the coordinates of the current frame T(0) to those of the preceding corrected frame F(-1).
The matrix of the projective transformation M is defined by:

$$\begin{pmatrix} x' \\ y' \\ w' \end{pmatrix} = M \begin{pmatrix} x \\ y \\ 1 \end{pmatrix} \qquad (2.7)$$

with x and y the coordinates of a pixel, or point p, in the current frame T(0); x' and y' the coordinates of the corresponding point p' in the preceding corrected frame F(-1); and w' a scale factor compensating for the vertical displacement of the carrier and/or the magnification of the camera.
To determine the matrix of the projective transformation M, it is necessary to identify at least q points p_i present in the current frame T(0) and found again, at points p'_i, in the preceding corrected frame F(-1).
To do this, an algorithm for identifying remarkable points is applied to each of the two frames considered, together with an algorithm for matching similar remarkable points between these two frames. For example, the identification algorithm corresponds to that implemented in the HARRIS procedure, which is inexpensive in terms of computation time. Moreover, the matching algorithm is, for example, a simple algorithm comparing the neighborhoods of a remarkable point p_i with those of the remarkable points p'_j and selecting, as the point p'_i corresponding to the point p_i, the point that best satisfies the criterion used.
A displacement vector p_i p'_i is then determined for each pair of similar points between the frames T(0) and F(-1).
To find the matrix of the projective transformation M from the q pairs (p_i; p'_i) of similar points, q being at least four, the following system of equations must be solved:

$$M p_1 = p'_1, \quad M p_2 = p'_2, \quad \ldots, \quad M p_q = p'_q$$

which can be written in the form:

$$MP = P' \;\Leftrightarrow\; (MP)^T = P'^T \;\Leftrightarrow\; P^T M^T = P'^T \qquad (2.9)$$

with P the matrix whose columns are the points p_i of the current frame T(0) and P' the matrix whose columns are the corresponding points p'_i of the corrected frame F(-1), both expressed in homogeneous coordinates.

With the least squares method, it can be shown that the solution is given by:

$$M^T = (P P^T)^{-1} P P'^T \qquad (2.10)$$
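By way of illustration, the estimation of M could be sketched as follows in Python with OpenCV. This is a sketch under stated assumptions: ORB features and brute-force matching stand in for the HARRIS identification and neighborhood matching described above, and cv2.findHomography performs the least-squares fit, robustified by RANSAC as suggested in the VARIATIONS section.

```python
import cv2
import numpy as np


def estimate_projective_transform(frame_t0, corrected_prev):
    """Sketch: estimate the projective transformation M mapping the current
    frame T(0) onto the reference plane of the previous corrected frame F(-1).
    Frames are assumed to be 8-bit grayscale images."""
    orb = cv2.ORB_create()                      # stand-in for the HARRIS identification
    k0, d0 = orb.detectAndCompute(frame_t0, None)
    k1, d1 = orb.detectAndCompute(corrected_prev, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(d0, d1)             # neighborhood-style matching
    p = np.float32([k0[m.queryIdx].pt for m in matches])        # points p in T(0)
    p_prime = np.float32([k1[m.trainIdx].pt for m in matches])  # points p' in F(-1)
    # Fit M on the q >= 4 pairs; RANSAC discards outlier pairs.
    M, _ = cv2.findHomography(p, p_prime, cv2.RANSAC, 3.0)
    return M
```

The corrected frame itself can then be obtained by warping T(0) with M, for example with cv2.warpPerspective.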
The application of the matrix M thus calculated to all the pixels of the current frame T(0) makes it possible to obtain the corrected current frame F(0).

STEP 30: determination of the displacement of the camera with respect to the scenery
The determination of the displacement of the camera with respect to the ground 1 over the s sampling steps is performed by comparing the past corrected frame F(-s) and the current corrected frame F(0).
The value of the integer s is chosen large enough to be able to observe a displacement of the points of the ground between the two compared frames.
On the other hand, the value of the integer s is chosen small enough not only so that a potential target is still found in both compared frames, but above all so that the algorithms used can converge relatively quickly.
The value of the integer s is between 5 and 15 and is preferably equal to 10.
In aerial photography, most of the image is made up of points of the ground 1 and not of points of moving objects.
A grid of points p_i(-s) is placed on the past corrected frame F(-s). The points of the grid are therefore mainly points corresponding to points of the ground. Evaluating the displacement of these points, or optical flow, from the past corrected frame F(-s) to the current corrected frame F(0) makes it possible to estimate the displacement vector v, in intensity and in direction, of the camera relative to the ground (that is, the velocity vector resulting from the combination of the movements of the camera 4 relative to the aircraft 6 and of the aircraft 6 relative to the ground 1, up to the residual parallax error and the magnification).
The calculation of the optical flow at each point p_i(-s) of the grid is carried out by implementing, for example, the Lucas-Kanade algorithm.
This algorithm assumes that the displacement of a point p_i(-s) of the grid between the frames F(-s) and F(0) is small, and that this displacement is approximately constant for every point p belonging to a neighborhood of the point p_i(-s) considered.
A study of the characteristics of the pixels around the point p_i(-s), and the search for these characteristics around points p close to p_i(-s) in the current corrected frame F(0), make it possible to determine the point p_i(0) of the current corrected frame F(0) corresponding to the point p_i(-s) of the past corrected frame F(-s). The optical flow at the point p_i(-s) is then given by the displacement vector connecting the points p_i(-s) and p_i(0).
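As an illustration, this grid-based optical flow could be sketched with OpenCV's pyramidal Lucas-Kanade tracker; the grid step and margin values below are illustrative assumptions, not values from the patent:

```python
import cv2
import numpy as np


def grid_optical_flow(f_past, f_current, step=32, margin=48):
    """Sketch: track a homogeneous grid of points p_i(-s) from the past
    corrected frame F(-s) to the current corrected frame F(0).
    Frames are assumed to be 8-bit grayscale images."""
    h, w = f_past.shape[:2]
    xs, ys = np.meshgrid(np.arange(margin, w - margin, step),
                         np.arange(margin, h - margin, step))
    pts = np.float32(np.stack([xs.ravel(), ys.ravel()], axis=1)).reshape(-1, 1, 2)
    new_pts, status, _ = cv2.calcOpticalFlowPyrLK(f_past, f_current, pts, None)
    ok = status.ravel() == 1                    # keep successfully tracked points
    return pts[ok].reshape(-1, 2), new_pts[ok].reshape(-1, 2)
```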
FIG. 3 shows a past corrected frame F(-s) on which the displacement vectors of the optical flow, obtained by comparison with a current corrected frame F(0), have been superimposed.
This method is a local method, which makes it possible to obtain a displacement vector for each of the points of the grid.
The maximum of the distribution of the intensities of the displacement vectors and the maximum of the distribution of their orientations in the XY plane constitute an estimate, respectively, of the intensity and of the direction in the XY plane of the displacement vector v of the camera 4 relative to the scenery.
Preferably, the displacement vector v is determined from the distribution of the displacement vectors by the implementation of a RANSAC algorithm. Such an algorithm estimates the displacement vector v iteratively, progressively eliminating the displacement vectors associated with points of the grid that correspond to moving objects or to measurement errors.
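A possible sketch of this robust estimation is given below; for simplicity it replaces a full RANSAC by an iterative elimination of outlying vectors around a median estimate, the tolerance being an illustrative assumption:

```python
import numpy as np


def estimate_displacement_vector(pts, new_pts, iterations=10, tol=2.0):
    """Sketch: estimate the displacement vector v of the camera relative to
    the ground from the grid flow, progressively discarding vectors caused
    by moving objects or measurement errors."""
    d = new_pts - pts                           # one displacement vector per grid point
    keep = np.ones(len(d), dtype=bool)
    for _ in range(iterations):
        v = np.median(d[keep], axis=0)          # robust stand-in for the distribution maxima
        keep_next = np.linalg.norm(d - v, axis=1) < tol
        if keep_next.sum() == keep.sum():
            break                               # converged: the inlier set is stable
        keep = keep_next
    return v
```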
It should be noted that the margin between the grid and the border of the frame is configurable, so as to prevent points of the grid from leaving the field of the frame between the instants t-s and t0 and thus falsifying the calculation. In effect, the border of the frame is "cut off" so as to be certain of finding the central vignette of the past corrected frame F(-s) within the current corrected frame F(0).
Moreover, such a parameterization makes it possible to control the number of points constituting the grid and, consequently, to make a compromise between the computation time and the quality of the estimate: the smaller the number of grid points, the faster this step converges; the larger the number of grid points, the more precise the estimate of the displacement vector v.
Since this step takes advantage of the fact that the image essentially comprises points of the ground, a grid of points distributed homogeneously can be used. This saves time, since it is not necessary to compute a specific grid, based for example on Harris regions, to isolate the portions of the image that correspond to the ground.
This algorithm is effective because the grid of points allows an over-representation of the points of the ground and thus the identification of the maxima of the distribution of the displacement vectors as the displacement of the camera with respect to the ground.

STEP 40: compensation of the displacement of the camera
The displacement vector v of the camera relative to the scenery makes it possible to construct a compensation matrix V, of the translation matrix type, to compensate for the displacement of the camera between the initial instant t-s of acquisition of the frame F(-s) and the current instant t0 of acquisition of the frame F(0).
The compensation matrix V is then applied to the current corrected frame F(0), so as to obtain a final current frame F'(0) directly superimposable on the past corrected frame F(-s), or initial frame. In other words, the frame F'(0) is now in the reference frame of the frame F(-s), and everything happens as if the frames F'(0) and F(-s) had been acquired by a static camera.
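For illustration, the compensation could be sketched as a simple translational warp, assuming v is expressed in pixels in the frame coordinate system:

```python
import cv2
import numpy as np


def compensate_camera_motion(f_current, v):
    """Sketch: apply the translational compensation matrix V built from the
    displacement vector v, yielding the final frame F'(0) superimposable
    on the past corrected frame F(-s)."""
    h, w = f_current.shape[:2]
    V = np.float32([[1, 0, -v[0]],
                    [0, 1, -v[1]]])             # translation opposite to the camera motion
    return cv2.warpAffine(f_current, V, (w, h)) # final frame F'(0)
```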
Moving regions must now be detected by comparing these two frames, which observe the same scene, from the same point of view, but at two different instants.

STEP 50: target identification

The step of identifying moving targets is broken down into a series of sub-steps.
In a subtraction sub-step 52, an absolute difference of the initial frame F(-s) and the final frame F'(0) is computed to determine, for each pixel, a color distance value. This value represents the change of state of the pixel between the two compared frames.
In practice, the noise of the sensor of the camera 4 alters these values; that is, two frames will never be identical, even in the case of a static scene. A threshold m must therefore be determined, below which the color distance value is considered to correspond to simple background noise, and above which this value is considered to correspond to motion information. The threshold m is configurable.
Thus, in thresholding sub-step 54, for each pixel, if the color distance value is below this threshold, the value of this pixel is set to zero. On the other hand, if the color distance value is greater than this threshold, the value of this pixel is set to unity. An intermediate bitmap is thus obtained.
In sub-step 56, an erosion of factor n is applied to eliminate the spurious values that may have affected the frame acquisition and which, on the intermediate bitmap, resemble a Dirac pulse. To eliminate them, a morphological erosion operator E is applied to the intermediate bitmap, which corresponds to the application of a bit mask on each pixel of the bitmap, as shown in Figure 4. For each pixel of the bitmap having a positive value, if the four immediately adjacent pixels also have a positive value, then the value of the considered pixel remains at unity; otherwise it is set to 0. Erosion therefore eliminates the areas of the intermediate bitmap having a reduced size, of the order of one pixel, and trims the larger ones, of the order of a few pixels.
In sub-step 58, to counter the second effect of erosion, a morphological dilation operator D, the inverse of the erosion operator E, is applied to the eroded bitmap. The same mask as in sub-step 56 is used: for each pixel of the eroded bitmap having a positive value, the four immediately adjacent pixels are modified to take the unit value, regardless of their initial value.
As a variant, mask types other than the cross mask of FIG. 4 can be used, for example square or circular masks. The mask is chosen according to the type of zone of the intermediate bitmap to which it is applied. The size of the mask is also an adjustable parameter, increasing or decreasing its radius of influence.
For the detection of moving objects of very small size, erosion has a significant impact on the minimum size (in number of pixels) above which the detection of a target remains possible. Care must therefore be taken to choose a mask size smaller than the size in pixels of the moving objects to be detected.
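Sub-steps 52 to 58 could be sketched as follows; the threshold m and the mask size being configurable, the values used here are purely illustrative, and the cross-shaped mask follows Figure 4:

```python
import cv2
import numpy as np


def moving_pixel_bitmap(f_initial, f_final, m=25, mask_size=3):
    """Sketch of sub-steps 52-58: pixelwise subtraction, thresholding into
    an intermediate bitmap, then erosion and dilation with the same mask."""
    diff = cv2.absdiff(f_initial, f_final)          # color distance per pixel
    if diff.ndim == 3:
        diff = diff.max(axis=2)                     # one distance value per pixel
    bitmap = (diff > m).astype(np.uint8)            # intermediate bitmap (0 / 1)
    kernel = cv2.getStructuringElement(cv2.MORPH_CROSS, (mask_size, mask_size))
    bitmap = cv2.erode(bitmap, kernel)              # remove Dirac-like spurious pixels
    bitmap = cv2.dilate(bitmap, kernel)             # counter the second effect of erosion
    return bitmap                                   # bitmap CB of moving pixels
```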
Finally, a bitmap CB is obtained with the moving pixels at unit value. Such a bitmap is shown in FIG. 5.

STEP 60: determination of the moving pixel areas

In step 60, a contour detection algorithm is applied to the bitmap so as to group the moving pixels corresponding to one and the same physical object, thereby defining moving areas within the bitmap CB.
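A minimal sketch of this contour grouping, assuming OpenCV 4's findContours signature:

```python
import cv2


def moving_areas(bitmap_cb):
    """Sketch of step 60: group the unit-value pixels of the bitmap CB into
    moving areas; each contour is a list of 2D pixel coordinates."""
    contours, _ = cv2.findContours(bitmap_cb, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return list(contours)                           # raw list L of moving areas
```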
The result of this step is a raw list L of moving areas defined by their contours, each contour being itself a list of 2D pixel coordinates.

STEP 70: verification of the results
A verification step of the result obtained in step 60 is performed to eliminate false targets, or false alarms.
To do this, a correlation calculation is performed between the initial and final frames, F(-s) and F'(0), on each of the moving zones determined at the end of step 60.
The aim is to calculate a correlation index between the curves of evolution of the intensity of the pixels of each of the two frames over the same moving zone.
When the correlation index is between -0.5 and 0.5, it is considered that there is no correlation between the frames over the moving zone. The zone does not evolve in the same way between the two frames, but does not correspond to a moving object: for example, it may be a tree moving slightly in the wind, or the apparent movement of a prominent object with respect to its background in the image. In this case, the considered zone is rejected from the list L of contours.
If the index is between 0.5 and 1, the zone is considered to be similar from one frame to the other. The area is not actually moving. It is therefore rejected from the list L.
If the index is between -1 and -0.5, the intensity curves of the two frames have opposite evolutions, which confirms the displacement of an object capable of modifying the properties of the considered zone from one frame to the other. The corresponding area is kept as a moving area in the list L.
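As an illustration, this verification could be sketched as follows, assuming the Pearson correlation coefficient as the correlation index and -0.5 as the decision threshold, following the ranges above:

```python
import cv2
import numpy as np


def zone_is_target(f_initial, f_final, contour):
    """Sketch of step 70: compute a correlation index between the pixel
    intensities of the two frames over one moving zone; the zone is kept
    only when the index is close to -1."""
    mask = np.zeros(f_initial.shape[:2], dtype=np.uint8)
    cv2.drawContours(mask, [contour], -1, 1, thickness=-1)  # fill the zone
    a = f_initial[mask == 1].astype(np.float64).ravel()
    b = f_final[mask == 1].astype(np.float64).ravel()
    r = np.corrcoef(a, b)[0, 1]                 # correlation index in [-1, 1]
    return r <= -0.5                            # anti-correlated: confirmed target
```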
This step 70 makes it possible to eliminate false alarms from the raw list L and to obtain a "cleaned" list L'.
STEP 80: display
The moving zones of the list L' are displayed superimposed on the current frame T(0) shown on the screen of the computer 12. The moving zones are, for example, identified by a polygon entirely containing the corresponding moving zone. This polygon allows the operator to focus on the delimited portion of the image in order to identify a target.
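For illustration, the superimposition could be sketched with bounding rectangles as the containing polygons; the color and line width are arbitrary choices:

```python
import cv2


def draw_detections(frame, areas):
    """Sketch of step 80: superimpose on the displayed frame a polygon
    (here a bounding rectangle) entirely containing each moving area."""
    for contour in areas:
        x, y, w, h = cv2.boundingRect(contour)
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 0, 255), 2)
    return frame
```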
A polygon determined at the current instant t0 is displayed not only on the current frame T(0) shown on the operator's screen, but also on the following frames of the video stream displayed on that screen, until a new frame of the video stream, T(+s) for example, is itself the subject of a target detection by the implementation of the present method.
The polygons calculated at the instant t+s are then taken into account. In known manner, by the implementation of matching algorithms (a sketch follows the list below):
- a polygon at the instant t+s is displayed as a replacement for a polygon at the instant t0 if they correspond, with a high probability, to the same target;
- a polygon at the instant t+s is displayed as a new polygon if it cannot be matched to a polygon at the instant t0 and if it corresponds, with a high probability, to a new target detected in the field of the camera;
- a polygon at the instant t0 continues to be displayed if it cannot be matched with a polygon at the instant t+s and it corresponds, with a high probability, to a hidden target, that is to say one that is not detected in the frame T(+s).
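The patent does not specify the matching algorithm; as one possible sketch, an intersection-over-union criterion on the bounding rectangles could implement the three cases above, the threshold being an illustrative assumption:

```python
def iou(a, b):
    """Intersection over union of two boxes given as (x, y, w, h)."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    x1, y1 = max(ax, bx), max(ay, by)
    x2, y2 = min(ax + aw, bx + bw), min(ay + ah, by + bh)
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    return inter / float(aw * ah + bw * bh - inter)


def match_polygons(old_boxes, new_boxes, iou_min=0.3):
    """Sketch: replaced = same target at t0 and t+s; new = new targets
    entering the field; hidden = targets at t0 not re-detected at t+s,
    kept displayed."""
    replaced, new, hidden = [], list(new_boxes), []
    for ob in old_boxes:
        best = max(new, key=lambda nb: iou(ob, nb), default=None)
        if best is not None and iou(ob, best) >= iou_min:
            replaced.append((ob, best))
            new.remove(best)
        else:
            hidden.append(ob)
    return replaced, new, hidden
```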
VARIATIONS
The points selected to correct the parallax error or to estimate the displacement of the camera relative to the ground must belong to the XY plane of the ground, to within a limited error. In practice, it often happens that the calculations made in the corresponding steps are altered by outliers. To limit this effect, the Random Sample Consensus (RANSAC) method allows the iterative elimination of these outliers until a model adapted to the data set is found.
The processing of the frames to return to a static acquisition comprises two major steps: the correction of the relative parallax error and the estimation of the displacement of the camera relative to the ground. In a variant, these two steps are carried out in a single operation, the evolution of the optical flow for the points of the ground giving information both on the speed of the camera relative to the ground and on the relative parallax error between the frames.
ADVANTAGES
The method according to the invention makes it possible to detect targets of small size (at least 3 × 3 pixels) and in a reactive manner (about 40 ms between the appearance of the object in the frames and its detection).
Advantageously, the method according to the invention makes it possible to detect targets in a non-georeferenced video stream. In addition, the position information of the carrier, for example determined by means of a GPS satellite positioning system, is not used.
Claims (14)
1. - Method (100) for detecting ground-based and moving targets in a video stream acquired by an airborne digital camera (4), characterized in that it comprises the steps of: - processing (20 - 40) a plurality of successive frames so as to stabilize the frames as if they had been acquired by a fixed camera; and - comparing (50 - 60) two processed frames, temporally separated from one another, so as to identify the areas of moving pixels from one frame to another, the moving pixel areas constituting detected targets.
2. - The method of claim 1, wherein the step of processing a plurality of successive frames comprises a step (20) of correcting a relative parallax error affecting each frame (T(0)) of said plurality of successive frames so as to obtain, for each frame (T(0)), a corrected frame (F(0)), this step implementing an algorithm for determining a projective transformation enabling passage from the frame (T(0)) at a current instant (t0) to a corrected frame (F(-1)) associated with a frame (T(-1)) at a previous sampling instant (t-1).
3. - The method of claim 2, wherein the step of processing a plurality of successive frames comprises a step (30) of determining a displacement vector (v) of the camera (4) relative to the ground (1), this step implementing an algorithm for determining an optical flow for passing from a current corrected frame (F(0)) to a past corrected frame (F(-s)) separated from the current corrected frame by an integer s of sampling instants.
4. - Method according to claim 3, wherein the algorithm for determining an optical flow uses a homogeneous grid of points.
5. - Method according to claim 3 or claim 4, wherein the integer s is selected between 5 and 15, preferably equal to 10.
6. - Method according to any one of claims 3 to 5, wherein, once the displacement vector (v) of the camera (4) relative to the ground (1) has been determined, a corresponding transformation (V) is applied to the current corrected frame (F(0)) to compensate for the effect of the displacement of the camera and to obtain a final frame (F'(0)) superimposable on the past corrected frame (F(-s)) taken as the initial frame.
7. - The method of claim 6, wherein the step of comparing two frames consists of comparing the final (F'(0)) and initial (F(-s)) frames with each other by successively performing the following sub-steps: - subtracting pixel by pixel the initial frame (F(-s)) from the final frame (F'(0)) so as to obtain, for each pixel, a color distance value; - applying a color distance threshold, each pixel having a color distance value less than or equal to said threshold taking the zero value and each pixel having a color distance value greater than said threshold taking the unit value, so as to obtain a bitmap; and - applying a contour determination algorithm on the bitmap to group the unit-value pixels into moving pixel areas.
8. - Method according to claim 7, wherein, the bitmap obtained at the end of the sub-step of applying a threshold being an intermediate bitmap, the following sub-steps are carried out: - application of a morphological erosion transformation using a suitable mask; - application of a morphological dilation transformation using the mask of the morphological erosion transformation.
9. - Method according to any one of the preceding claims, wherein the method comprises an additional step (70) of checking the coherence of the moving pixel areas identified at the end of the step of comparing two frames, determining, for each moving zone, a correlation index between the values of the pixels of the two frames, a moving area being considered to correspond to a target when the correlation index is close to the value -1.
10. - Method according to any one of the preceding claims, wherein a moving pixel area identified at the end of the comparison step is displayed in a suitable manner superimposed on the video stream displayed on a control screen (12).
11. - Method according to any one of the preceding claims, wherein the video stream is a video stream in the visible or infrared spectral range.
12. - Method according to any one of the preceding claims, executed in real time or constrained time on the video stream.
13. - Method according to any one of the preceding claims, allowing the detection of small targets, of the order of a few pixels.
14. - Information recording medium comprising the instructions of a computer program adapted to be executed by a computer to implement a method for detecting ground-based and moving targets in a video stream acquired by an airborne digital camera (4) according to any one of claims 1 to 13.